TinyMistral-6x248M-Instruct
Apache-2.0
A Mixture of Experts (MoE) language model built by merging six TinyMistral-248M expert models with the LazyMergekit framework and fine-tuned for instruction-following tasks.
Large Language Model
Transformers · English
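
Below is a minimal sketch of loading and querying the model with the Hugging Face Transformers library. The repository id `M4-ai/TinyMistral-6x248M-Instruct` is an assumption based on the model name; adjust it to the actual hub path if it differs.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed repository id, inferred from the model name.
model_id = "M4-ai/TinyMistral-6x248M-Instruct"

# Load the tokenizer and the MoE model from the Hugging Face Hub.
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Run a simple instruction-style prompt through the model.
prompt = "Explain what a Mixture of Experts model is."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```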